Abstract
Ride-hailing services provide convenient, affordable, and on-demand travel options. However, the rapid expansion of these platforms has introduced security challenges, including illegal driver substitution, in which unauthorized individuals operate as drivers and jeopardize passenger safety. Detecting such substitutions remains difficult because existing platforms lack effective real-time verification mechanisms: they depend primarily on manual identity checks, such as document inspection or static photograph comparison, which are susceptible to manipulation and cannot ensure continuous monitoring during rides, leaving significant security gaps. To overcome these limitations, this project introduces a solution that leverages face biometrics and the Vision Transformer (ViT) model. The system employs real-time facial recognition to verify drivers during rides, exploiting ViT's capability to capture intricate facial features with high precision under diverse conditions, including varying lighting, angles, and expressions. A centralized biometric database provides secure storage, and alerts are triggered when an unauthorized driver is detected, ensuring a prompt response. This approach strengthens the security framework of ride-hailing services by delivering continuous, automated, and reliable driver authentication, mitigating the risks of illegal driver substitution and fostering a safer, more trustworthy environment for passengers and service providers.
Introduction
The growth of ride-hailing services like Uber and Ola has transformed urban transport by offering convenience and real-time mobility. However, a major security issue is illegal driver substitution, where unverified individuals operate using registered drivers’ accounts, compromising passenger safety and platform integrity.
To address this, a proposed system introduces real-time facial recognition using Vision Transformer (ViT) models to verify driver identity just before each ride. This approach improves upon static ID checks by offering live, automated authentication, ensuring only verified drivers can operate.
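The verification step described above typically reduces to comparing a live face embedding against the enrolled template. The following is a minimal sketch of that comparison logic, assuming embeddings have already been produced by a ViT (or any) face encoder; the function names and the 0.6 threshold are illustrative, not part of the proposed system's specification.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_driver(enrolled: np.ndarray, live: np.ndarray,
                  threshold: float = 0.6) -> bool:
    """Accept the live capture only if its embedding is sufficiently
    close to the template stored in the biometric database."""
    return cosine_similarity(enrolled, live) >= threshold
```

In practice the threshold would be tuned on a validation set to balance false accepts (substituted driver passes) against false rejects (legitimate driver blocked by poor lighting).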
The system is built with Python, Flask, TensorFlow, and OpenCV, and is designed to be lightweight and easy to integrate into existing platforms. It achieves over 95% accuracy, performs reliably under challenging conditions such as poor lighting, and raises real-time alerts on unauthorized substitutions.
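The real-time alerting behavior could be organized as a small monitor that tolerates occasional recognition misses (e.g., a single frame lost to glare) but escalates after consecutive failures. The sketch below is a hypothetical design using only the standard library; the class name, failure tolerance, and alert message are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class VerificationMonitor:
    """Tracks periodic identity checks during a ride and raises an
    alert when consecutive failures exceed a tolerance, absorbing
    one-off misses caused by poor lighting or brief occlusion."""
    max_failures: int = 3
    on_alert: Callable[[str], None] = print
    _failures: int = 0

    def record(self, driver_id: str, verified: bool) -> bool:
        """Record one verification result; return True if an alert fired."""
        if verified:
            self._failures = 0          # a pass resets the streak
            return False
        self._failures += 1
        if self._failures >= self.max_failures:
            self.on_alert(f"Possible driver substitution on account {driver_id}")
            self._failures = 0          # avoid repeated alerts for one event
            return True
        return False
```

A dashboard or notification service would be wired in through `on_alert`, keeping the escalation policy separate from the delivery channel.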
A user-friendly web interface and an admin dashboard enable smooth operations and monitoring. The platform is also scalable and secure, ready for widespread adoption in the ride-hailing industry.
Conclusion
In conclusion, this project delivers a comprehensive and innovative solution to the growing security concerns in ride-hailing services by implementing a real-time facial recognition system powered by the Vision Transformer (ViT) model and the IllFaceNet dataset. Through accurate and continuous driver verification, it effectively detects and prevents illegal driver substitution, enhancing passenger safety and overall platform trust. The integration of essential modules, such as driver registration, OTP verification, and an intuitive admin dashboard, ensures a smooth and secure user experience. With its scalable architecture, real-time alert system, and strong emphasis on data privacy, the project stands as a forward-looking approach to securing urban mobility and sets a new standard for AI-powered transportation solutions.
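The OTP verification module mentioned above could be realized along the following lines. This is a minimal standard-library sketch, not the project's actual implementation: the class name, the 6-digit format, and the 5-minute lifetime are assumptions, and a real deployment would deliver the code by SMS and persist pending codes outside process memory.

```python
import hmac
import secrets
import time


class OtpService:
    """Issues short-lived, single-use numeric one-time passwords
    for driver sign-in."""

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self._pending = {}  # phone -> (code, expiry timestamp)

    def issue(self, phone: str) -> str:
        code = f"{secrets.randbelow(10**6):06d}"  # random 6-digit code
        self._pending[phone] = (code, time.time() + self.ttl)
        return code  # a real system would send this via SMS, not return it

    def verify(self, phone: str, code: str) -> bool:
        entry = self._pending.pop(phone, None)  # pop makes codes single-use
        if entry is None:
            return False
        expected, expiry = entry
        # constant-time comparison plus expiry check
        return time.time() <= expiry and hmac.compare_digest(expected, code)
```

Using `secrets` rather than `random` gives cryptographically strong codes, and `hmac.compare_digest` avoids timing side channels during comparison.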